Dog Breed Classifier with Convolutional Neural Networks

Project introduction

The goal of the notebook is to develop an algorithm that accepts any user-supplied image and returns the predicted dog breed. If a dog is detected in the image, it will provide an estimate of the dog's breed. If a human is detected, it will provide an estimate of the dog breed that the human most resembles.

The notebook pieces together a series of models to perform different tasks; for instance, the algorithm that detects humans in an image will be different from the CNN that infers dog breed.

The Road Ahead

The notebook is separated into the following steps:

  • Step 0: Import Datasets
  • Step 1: Detect Humans
  • Step 2: Detect Dogs
  • Step 3: Create a CNN to Classify Dog Breeds (from Scratch)
  • Step 4: Create a CNN to Classify Dog Breeds (using Transfer Learning)
  • Step 5: Write your Algorithm
  • Step 6: Test Your Algorithm

Step 0: Import Datasets

Make sure that you've downloaded the required human and dog datasets:

Note: if you are using the Udacity workspace, you DO NOT need to re-download these - they can be found in the /data folder as noted in the cell below.

  • Download the dog dataset. Unzip the folder and place it in this project's home directory, at the location /dog_images.

  • Download the human dataset. Unzip the folder and place it in the home directory, at location /lfw.

In the code cell below, we save the file paths for both the human (LFW) dataset and dog dataset in the numpy arrays human_files and dog_files.

In [1]:
import numpy as np
from glob import glob

# load filenames for human and dog images
human_files = np.array(glob("/data/lfw/*/*"))
dog_files = np.array(glob("/data/dog_images/*/*/*"))

# print number of images in each dataset
print('There are %d total human images.' % len(human_files))
print('There are %d total dog images.' % len(dog_files))
There are 13233 total human images.
There are 8351 total dog images.

Step 1: Detect Humans

In this section, we use OpenCV's implementation of Haar feature-based cascade classifiers to detect human faces in images.

OpenCV provides many pre-trained face detectors, stored as XML files on github. We have downloaded one of these detectors and stored it in the haarcascades directory. In the next code cell, we demonstrate how to use this detector to find human faces in a sample image.

In [2]:
import cv2                
import matplotlib.pyplot as plt                        
%matplotlib inline                               

# extract pre-trained face detector
face_cascade = cv2.CascadeClassifier('haarcascades/haarcascade_frontalface_alt.xml')

# load color (BGR) image
img = cv2.imread(human_files[0])
# convert BGR image to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# find faces in image
faces = face_cascade.detectMultiScale(gray)

# print number of faces detected in the image
print('Number of faces detected:', len(faces))

# get bounding box for each detected face
for (x,y,w,h) in faces:
    # add bounding box to color image
    cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)
    
# convert BGR image to RGB for plotting
cv_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

# display the image, along with bounding box
plt.imshow(cv_rgb)
plt.show()
Number of faces detected: 1

Before using any of the face detectors, it is standard procedure to convert the images to grayscale. The detectMultiScale function executes the classifier stored in face_cascade and takes the grayscale image as a parameter.

In the above code, faces is a numpy array of detected faces, where each row corresponds to a detected face. Each detected face is a 1D array with four entries that specifies the bounding box of the detected face. The first two entries in the array (extracted in the above code as x and y) specify the horizontal and vertical positions of the top left corner of the bounding box. The last two entries in the array (extracted here as w and h) specify the width and height of the box.

Write a Human Face Detector

We can use this procedure to write a function that returns True if a human face is detected in an image and False otherwise. This function, named face_detector, takes a string-valued file path to an image as input and appears in the code block below.

In [3]:
def face_detector(img_path):
    """Returns True if a human face is detected in the image stored at img_path"""
    img = cv2.imread(img_path)
    if img is None:  # guard against missing or unreadable files
        return False
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray)
    return len(faces) > 0

Assess the Human Face Detector

Question 1: Use the code cell below to test the performance of the face_detector function.

  • What percentage of the first 100 images in human_files have a detected human face?
  • What percentage of the first 100 images in dog_files have a detected human face?

Ideally, we would like 100% of human images with a detected face and 0% of dog images with a detected face. You will see that our algorithm falls short of this goal, but still gives acceptable performance. We extract the file paths for the first 100 images from each of the datasets and store them in the numpy arrays human_files_short and dog_files_short.

Answer: as the cell below shows, a face is detected in 98% of the first 100 human images and, spuriously, in 17% of the first 100 dog images.

In [4]:
human_files_short = human_files[:100]
dog_files_short = dog_files[:100]

n_faces = lambda images: sum([face_detector(img_path) for img_path in images])
print("Number of human faces in 'human_files_short':", n_faces(human_files_short))
print("Number of human faces in 'dog_files_short':", n_faces(dog_files_short))
Number of human faces in 'human_files_short': 98
Number of human faces in 'dog_files_short': 17
In [5]:
def show_img_with_detected_face(img_path):
    """Show human faces in dog images"""
    img = cv2.imread(img_path)
    # convert BGR image to grayscale
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # find faces in image
    faces = face_cascade.detectMultiScale(gray)
    # print number of faces detected in the image
    print('Number of faces detected:', len(faces))
    # get bounding box for each detected face
    for (x,y,w,h) in faces:
        # add bounding box to color image
        cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)
    # convert BGR image to RGB for plotting
    cv_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    # display the image, along with bounding box
    plt.imshow(cv_rgb)
    plt.show()


for img_path in dog_files_short:
    if face_detector(img_path):
        show_img_with_detected_face(img_path)
Number of faces detected: 1
Number of faces detected: 1
Number of faces detected: 3
Number of faces detected: 1
Number of faces detected: 1
Number of faces detected: 1
Number of faces detected: 1
Number of faces detected: 1
Number of faces detected: 1
Number of faces detected: 1
Number of faces detected: 1
Number of faces detected: 2
Number of faces detected: 1
Number of faces detected: 1
Number of faces detected: 1
Number of faces detected: 1
Number of faces detected: 1

Step 2: Detect Dogs

In this section, we use a pre-trained model to detect dogs in images.

Obtain Pre-trained VGG-16 Model

The code cell below downloads the VGG-16 model, along with weights that have been trained on ImageNet, a very large, very popular dataset used for image classification and other vision tasks. ImageNet contains over 10 million URLs, each linking to an image containing an object from one of 1000 categories.

In [6]:
import torch
import torchvision.models as models

# define VGG16 model
VGG16 = models.vgg16(pretrained=True)

# check if CUDA is available
use_cuda = torch.cuda.is_available()

# move model to GPU if CUDA is available
if use_cuda:
    VGG16 = VGG16.cuda()
Downloading: "https://download.pytorch.org/models/vgg16-397923af.pth" to /root/.torch/models/vgg16-397923af.pth
100%|██████████| 553433881/553433881 [00:09<00:00, 58110993.03it/s]

Given an image, this pre-trained VGG-16 model returns a prediction (derived from the 1000 possible categories in ImageNet) for the object that is contained in the image.

Making Predictions with a Pre-trained Model

The next code cell specifies a function that accepts a path to an image (such as 'dogImages/train/001.Affenpinscher/Affenpinscher_00001.jpg') as input and returns the index corresponding to the ImageNet class that is predicted by the pre-trained VGG-16 model. The output should always be an integer between 0 and 999, inclusive.

In [7]:
from PIL import Image, ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True  # https://stackoverflow.com/a/23575424

import torchvision.transforms as transforms

def VGG16_predict(img_path):
    '''
    Use pre-trained VGG-16 model to obtain index corresponding to 
    predicted ImageNet class for image at specified path
    
    Args:
        img_path: path to an image
        
    Returns:
        Index corresponding to VGG-16 model's prediction
    '''
    normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                     std=[0.229, 0.224, 0.225])
    data_transforms = transforms.Compose([
        transforms.Resize(256),      # deterministic preprocessing for inference:
        transforms.CenterCrop(224),  # random crops belong in training, not prediction
        transforms.ToTensor(),
        normalize
    ])
    img = Image.open(img_path).convert('RGB')  # handle grayscale/RGBA inputs
    img_tensor = data_transforms(img)
    img_tensor = img_tensor.unsqueeze(0)
    
    VGG16.eval()
    if use_cuda:
        img_tensor = img_tensor.cuda()
    with torch.no_grad():  # inference only; no gradients needed
        output = VGG16(img_tensor)
    _, pred = torch.max(output, 1)
    return pred[0]


dog_idx = 25
VGG16_predict(dog_files[dog_idx])
Out[7]:
tensor(243, device='cuda:0')
In [8]:
img = cv2.imread(dog_files[dog_idx])
plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))  # convert BGR to RGB for correct colors
plt.show()

Write a Dog Detector

In the dictionary of ImageNet class labels, the dog categories appear in an uninterrupted sequence corresponding to keys 151-268, inclusive, covering all categories from 'Chihuahua' to 'Mexican hairless'. Thus, to check whether the pre-trained VGG-16 model predicts that an image contains a dog, we need only check whether the predicted index falls between 151 and 268 (inclusive).

In [9]:
def dog_detector(img_path, predictor):
    """Returns True if a dog is detected in the image stored at img_path"""
    first_dog_index = 151
    last_dog_index = 268
    predicted = predictor(img_path).item()  # .item() works for CPU and CUDA tensors alike
    return first_dog_index <= predicted <= last_dog_index
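
As a quick sanity check (a hypothetical cell, not part of the original notebook), the detector should flag the sample dog image from earlier, whose predicted index of 243 falls inside the [151, 268] range:

print(dog_detector(dog_files[dog_idx], VGG16_predict))  # expected: True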

Assess the Dog Detector

Question 2: Test the performance of your dog_detector function.

  • What percentage of the images in human_files_short have a detected dog?
  • What percentage of the images in dog_files_short have a detected dog?

Answer: as shown below, the VGG-16 based detector finds a dog in 0% of the images in human_files_short and in 96% of the images in dog_files_short.

In [10]:
def percentage_of_dogs(imgs):
    return sum([dog_detector(img_path, VGG16_predict) for img_path in imgs]) / len(imgs)*100


print("Percentage of images in `human_files_short` with a detected dog:", percentage_of_dogs(human_files_short))
print("Percentage of images in `dog_files_short` with a detected dog:", percentage_of_dogs(dog_files_short))
Percentage of images in `human_files_short` with a detected dog: 0.0
Percentage of images in `dog_files_short` with a detected dog: 96.0

For comparison, the code cell below runs the same assessment with the pre-trained ResNet-50 PyTorch model:

In [11]:
resnet50 = models.resnet50(pretrained=True)
if use_cuda:
    resnet50 = resnet50.cuda()
    
def resnet50_predict(img_path):
    '''
    Use pre-trained Resnet-50 model to obtain index corresponding to 
    predicted ImageNet class for image at specified path
    
    Args:
        img_path: path to an image
        
    Returns:
        Index corresponding to the Resnet-50 model's prediction
    '''
    normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                     std=[0.229, 0.224, 0.225])
    data_transforms = transforms.Compose([
        transforms.Resize(256),      # deterministic preprocessing for inference:
        transforms.CenterCrop(224),  # random crops belong in training, not prediction
        transforms.ToTensor(),
        normalize
    ])
    img = Image.open(img_path).convert('RGB')  # handle grayscale/RGBA inputs
    img_tensor = data_transforms(img)
    img_tensor = img_tensor.unsqueeze(0)
    
    resnet50.eval()
    if use_cuda:
        img_tensor = img_tensor.cuda()
    with torch.no_grad():  # inference only; no gradients needed
        output = resnet50(img_tensor)
    _, pred = torch.max(output, 1)
    return pred[0] # predicted class index


percentage_of_dogs = lambda imgs: sum([dog_detector(img_path, resnet50_predict) for img_path in imgs]) / len(imgs)*100
print("Percentage of images in `human_files_short` with a detected dog:", percentage_of_dogs(human_files_short))
print("Percentage of images in `dog_files_short` with a detected dog:", percentage_of_dogs(dog_files_short))
Downloading: "https://download.pytorch.org/models/resnet50-19c8e357.pth" to /root/.torch/models/resnet50-19c8e357.pth
100%|██████████| 102502400/102502400 [00:01<00:00, 67025824.54it/s]
Percentage of images in `human_files_short` with a detected dog: 1.0
Percentage of images in `dog_files_short` with a detected dog: 98.0

Step 3: Create a CNN to Classify Dog Breeds (from Scratch)

Now that we have functions for detecting humans and dogs in images, we need a way to predict breed from images. This step creates a CNN that classifies dog breeds. The CNN must be built from scratch (no transfer learning yet!) and must attain a test accuracy of at least 10%. Step 4 of this notebook performs a similar task but uses transfer learning to attain greatly improved accuracy.

The task of assigning breed to dogs from images is considered exceptionally challenging. To see why, consider that even a human would have trouble distinguishing between a Brittany and a Welsh Springer Spaniel.

[Images: Brittany | Welsh Springer Spaniel]

It is not difficult to find other dog breed pairs with minimal inter-class variation (for instance, Curly-Coated Retrievers and American Water Spaniels).

[Images: Curly-Coated Retriever | American Water Spaniel]

Likewise, recall that labradors come in yellow, chocolate, and black.

[Images: Yellow Labrador | Chocolate Labrador | Black Labrador]

Random chance presents an exceptionally low bar: setting aside the fact that the classes are slightly imbalanced, a random guess will provide a correct answer roughly 1 in 133 times, which corresponds to an accuracy of less than 1% (1/133 ≈ 0.75%).

Specify Data Loaders for the Dog Dataset

The code cell below specifies three separate data loaders for the training, validation, and test datasets of dog images (located at dog_images/train, dog_images/valid, and dog_images/test, respectively). The documentation on custom datasets might be a useful resource. For augmenting the training and/or validation data, check out the wide variety of transforms.

In [12]:
%ls /data/dog_images/train | wc -l
133
In [13]:
from torchvision import datasets

# The batch size was initially set to 256, following Simonyan and Zisserman (2015): https://arxiv.org/pdf/1409.1556.pdf
# However, training ran out of GPU memory, so the batch size was decreased to 128 and then to 64.
batch_size = 64  
standard_means = (0.485, 0.456, 0.406)  # ImageNet normalization values: https://stackoverflow.com/a/58151903
standard_stds = (0.229, 0.224, 0.225)

n_classes = 133

train_transforms = transforms.Compose([
#     transforms.Resize(256),
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(15),
    transforms.ToTensor(),
    transforms.Normalize(standard_means, standard_stds)
])

test_transforms = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(standard_means, standard_stds)
])


train_data = datasets.ImageFolder('/data/dog_images/train/', transform=train_transforms)
valid_data = datasets.ImageFolder('/data/dog_images/valid/', transform=test_transforms)
test_data = datasets.ImageFolder('/data/dog_images/test/', transform=test_transforms)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size, shuffle=True)
valid_loader = torch.utils.data.DataLoader(valid_data, batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size, shuffle=True)
loaders = {'train': train_loader, 'valid': valid_loader, 'test': test_loader}
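
A quick sanity check (a hypothetical cell, not part of the original notebook) confirms that one training batch has the expected shape of 64 images of size 3x224x224:

images, labels = next(iter(train_loader))
print(images.shape)  # torch.Size([64, 3, 224, 224])
print(labels.shape)  # torch.Size([64])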

Question 3: Describe the procedure for preprocessing the data.

  • How does your code resize the images (by cropping, stretching, etc)? What size did you pick for the input tensor, and why?

  • Did you decide to augment the dataset? If so, how (through translations, flips, rotations, etc)? If not, why not?

Answer:

  • The input size for the network has been set to 224x224, in line with the values used in the literature (https://pytorch.org/vision/stable/models.html, https://arxiv.org/pdf/1409.1556.pdf).
  • For the training set, this size is obtained by randomly cropping each image and resizing the crop to 224x224 (RandomResizedCrop).
  • As for the validation and test sets, the code resizes the images to 256x256 and then crops them at their center with a 224x224 square.
  • Furthermore, the code augments the training data: a random horizontal flip followed by a random rotation of up to 15 degrees, encouraging invariance to pose (the random crop above also adds translation and scale variation). A transformed sample is visualized below.
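
To eyeball the augmentation pipeline, the sketch below (a hypothetical cell, not part of the original notebook) un-normalizes one transformed training sample and displays it:

img, label = train_data[0]
# undo the ImageNet normalization for display
img = img.permute(1, 2, 0).numpy() * np.array(standard_stds) + np.array(standard_means)
plt.imshow(np.clip(img, 0, 1))
plt.title(train_data.classes[label])
plt.show()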

Model Architecture

Create a CNN to classify dog breed.

  • Shape = (W_in - F + 2P) / S + 1
    • W_in = the width/height (square) of the previous layer
    • F = Filter size
    • P = Padding
    • S = Stride
In [14]:
shape = lambda w_in, f, p, s: (w_in - f + 2*p) / s + 1
shape(256, 2, 0, 2)
Out[14]:
128.0
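
Applying the same helper to the actual 224x224 input (a hypothetical check, not part of the original notebook) traces the spatial size through the four conv + pool stages of the model defined below and confirms the 256*14*14 input of its first fully-connected layer:

w = 224
for _ in range(4):
    w = shape(w, 3, 1, 1)  # 3x3 conv, padding 1, stride 1: size unchanged
    w = shape(w, 2, 0, 2)  # 2x2 max pool, stride 2: size halved
print(w)  # 14.0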
In [15]:
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 64, 3, stride=1, padding=1)
        self.conv2 = nn.Conv2d(64, 128, 3, stride=1, padding=1)
        self.conv3 = nn.Conv2d(128, 256, 3, stride=1, padding=1)
        self.conv4 = nn.Conv2d(256, 256, 3, stride=1, padding=1)
        
        self.pool = nn.MaxPool2d(2, stride=2)
        self.dropout = nn.Dropout(p=0.2)
        
        self.fc1 = nn.Linear(256*14*14, 512)
        self.fc2 = nn.Linear(512, 256)
        self.fc3 = nn.Linear(256, n_classes)
    
    def forward(self, x):
        x = F.relu(self.conv1(x))  # In 3x224x224  | Out 64x224x224
        x = self.pool(x)           # In 64x224x224 | Out 64x112x112
        x = F.relu(self.conv2(x))  # In 64x112x112 | Out 128x112x112
        x = self.pool(x)           # In 128x112x112 | Out 128x56x56
        x = F.relu(self.conv3(x))  # In 128x56x56  | Out 256x56x56
        x = self.pool(x)           # In 256x56x56   | Out 256x28x28
        x = F.relu(self.conv4(x))  # In 256x28x28   | Out 256x28x28
        x = self.pool(x)           # In 256x28x28   | Out 256x14x14
        x = x.view(-1, 256*14*14)
        x = F.relu(self.fc1(x))
        x = self.dropout(x)
        x = F.relu(self.fc2(x))
        x = self.dropout(x)
        return F.log_softmax(self.fc3(x), dim=1)

# instantiate the CNN
model_scratch = Net()

# move tensors to GPU if CUDA is available
if use_cuda:
    model_scratch.cuda()

Question 4: Outline the steps you took to get to your final CNN architecture and your reasoning at each step.

Answer:

The model architecture is based on the work of Simonyan and Zisserman (2015), with simplifications to reduce complexity and computational cost, since the goal is only to reach at least 10% accuracy. In particular, the architecture follows the authors' ConvNet configuration A (Table 1 of the paper), using up to the third block of convolutional layers.

My architecture has 4 convolution and max-pooling combinations with 64, 128, 256, and 256 filters. With this layout, the network can capture patterns at increasing levels of abstraction, starting with 64 filters for simple patterns and scaling to 256 for more complex ones. After these combinations, the network introduces two fully-connected layers, each followed by a dropout layer aimed at reducing potential overfitting. Finally, the last fully-connected layer takes the output of the previous layer and predicts the image class using a log softmax function. Except for the final layer, all layers use a ReLU activation function.
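
For a sense of scale, the snippet below (a hypothetical cell, not part of the original notebook) counts the model's parameters; the total is dominated by the 256*14*14 -> 512 fully-connected layer:

n_params = sum(p.numel() for p in model_scratch.parameters())
print(f"model_scratch has {n_params:,} parameters")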

Specify Loss Function and Optimizer

The next code cell specifies a loss function and optimizer. Because the network's forward pass ends in log_softmax, the negative log-likelihood loss (NLLLoss) is used as the criterion; together, the two are equivalent to applying CrossEntropyLoss to raw logits.

In [16]:
import torch.optim as optim

criterion_scratch = nn.NLLLoss()
optimizer_scratch = optim.Adam(params=model_scratch.parameters(), lr=0.001)  # Tested LR: 0.01, 0.005, 0.001

loaders_scratch = loaders

Train and Validate the Model

In [17]:
def train(n_epochs, loaders, model, optimizer, criterion, use_cuda, save_path):
    """Returns a trained model"""
    # initialize tracker for minimum validation loss
    valid_loss_min = np.Inf 
    
    for epoch in range(1, n_epochs+1):
        # initialize variables to monitor training and validation loss
        train_loss = 0.0
        valid_loss = 0.0
        
        ###################
        # train the model #
        ###################
        model.train()
        for batch_idx, (data, target) in enumerate(loaders['train']):
            # move to GPU
            if use_cuda:
                data, target = data.cuda(), target.cuda()
            optimizer.zero_grad()
            
            output = model(data)
            loss = criterion(output, target)
            loss.backward()
            optimizer.step()
            
            train_loss += loss.item()  # Avg. loss for the batch
            
        ######################    
        # validate the model #
        ######################
        model.eval()
        with torch.no_grad():  # no gradients needed for validation
            for batch_idx, (data, target) in enumerate(loaders['valid']):
                # move to GPU
                if use_cuda:
                    data, target = data.cuda(), target.cuda()
                output = model(data)
                loss = criterion(output, target)
                valid_loss += loss.item()

        train_loss /= len(loaders['train'])
        valid_loss /= len(loaders['valid'])
            
        # print training/validation statistics 
        print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
            epoch, 
            train_loss,
            valid_loss
            ))
        
        ## save the model if validation loss has decreased
        if valid_loss < valid_loss_min:
            print("Saving the model...")
            valid_loss_min = valid_loss
            torch.save(model.state_dict(), save_path)
    return model
In [18]:
model_scratch = train(30, loaders_scratch, model_scratch, optimizer_scratch, 
                      criterion_scratch, use_cuda, 'model_scratch.pt')
Epoch: 1 	Training Loss: 4.888742 	Validation Loss: 4.878490
Saving the model...
Epoch: 2 	Training Loss: 4.804955 	Validation Loss: 4.684582
Saving the model...
Epoch: 3 	Training Loss: 4.721564 	Validation Loss: 4.635341
Saving the model...
Epoch: 4 	Training Loss: 4.670323 	Validation Loss: 4.532006
Saving the model...
Epoch: 5 	Training Loss: 4.566479 	Validation Loss: 4.384636
Saving the model...
Epoch: 6 	Training Loss: 4.460677 	Validation Loss: 4.290301
Saving the model...
Epoch: 7 	Training Loss: 4.392174 	Validation Loss: 4.218230
Saving the model...
Epoch: 8 	Training Loss: 4.324760 	Validation Loss: 4.124851
Saving the model...
Epoch: 9 	Training Loss: 4.263491 	Validation Loss: 4.128153
Epoch: 10 	Training Loss: 4.191592 	Validation Loss: 4.039403
Saving the model...
Epoch: 11 	Training Loss: 4.147696 	Validation Loss: 3.924989
Saving the model...
Epoch: 12 	Training Loss: 4.093584 	Validation Loss: 3.933133
Epoch: 13 	Training Loss: 4.045769 	Validation Loss: 3.914472
Saving the model...
Epoch: 14 	Training Loss: 4.009339 	Validation Loss: 3.942009
Epoch: 15 	Training Loss: 3.960105 	Validation Loss: 3.799138
Saving the model...
Epoch: 16 	Training Loss: 3.929384 	Validation Loss: 3.736322
Saving the model...
Epoch: 17 	Training Loss: 3.912054 	Validation Loss: 3.706832
Saving the model...
Epoch: 18 	Training Loss: 3.863767 	Validation Loss: 3.670433
Saving the model...
Epoch: 19 	Training Loss: 3.842039 	Validation Loss: 3.633750
Saving the model...
Epoch: 20 	Training Loss: 3.806404 	Validation Loss: 3.567941
Saving the model...
Epoch: 21 	Training Loss: 3.769682 	Validation Loss: 3.705639
Epoch: 22 	Training Loss: 3.753393 	Validation Loss: 3.668056
Epoch: 23 	Training Loss: 3.712307 	Validation Loss: 3.608768
Epoch: 24 	Training Loss: 3.672012 	Validation Loss: 3.544251
Saving the model...
Epoch: 25 	Training Loss: 3.639339 	Validation Loss: 3.465901
Saving the model...
Epoch: 26 	Training Loss: 3.632008 	Validation Loss: 3.603103
Epoch: 27 	Training Loss: 3.601083 	Validation Loss: 3.444988
Saving the model...
Epoch: 28 	Training Loss: 3.562649 	Validation Loss: 3.361953
Saving the model...
Epoch: 29 	Training Loss: 3.561537 	Validation Loss: 3.357720
Saving the model...
Epoch: 30 	Training Loss: 3.504383 	Validation Loss: 3.247425
Saving the model...

Test the Model

The code cell below calculates and prints the test loss and accuracy.

In [19]:
# load the model that achieved the lowest validation loss
model_scratch.load_state_dict(torch.load('model_scratch.pt'))


def test(loaders, model, criterion, use_cuda, use_log=False):
    """Tests the classifier monitoring the loss and accuracy"""
    test_loss = 0.
    correct = 0.
    total = 0.

    model.eval()
    with torch.no_grad():  # no gradients needed at test time
        for batch_idx, (data, target) in enumerate(loaders['test']):
            # move to GPU
            if use_cuda:
                data, target = data.cuda(), target.cuda()
            # forward pass: compute predicted outputs by passing inputs to the model
            output = model(data)
            # calculate the loss
            loss = criterion(output, target)
            # update average test loss 
            test_loss += loss.item()
            # convert log-probabilities back to probabilities
            # (exp is monotonic, so the argmax is unchanged either way)
            if use_log:
                output = torch.exp(output)
            pred = output.data.max(1, keepdim=True)[1]
            # compare predictions to true label
            correct += np.sum(np.squeeze(pred.eq(target.data.view_as(pred))).cpu().numpy())
            total += data.size(0)
    
    test_loss /= len(loaders['test'])
    print('Test Loss: {:.6f}\n'.format(test_loss))

    print('\nTest Accuracy: %2d%% (%2d/%2d)' % (
        100. * correct / total, correct, total))


test(loaders_scratch, model_scratch, criterion_scratch, use_cuda, use_log=True)
Test Loss: 3.278857


Test Accuracy: 18% (154/836)

Step 4: Create a CNN to Classify Dog Breeds (using Transfer Learning)

This section uses transfer learning to create a CNN that can identify dog breed from images with a target accuracy of at least 60% on the test set.

Specify Data Loaders for the Dog Dataset

In [20]:
loaders_transfer = loaders

Model Architecture

Using transfer learning to create a CNN to classify dog breed.

In [21]:
import torchvision.models as models
import torch.nn as nn

model_transfer = models.vgg16(pretrained=True)
if use_cuda:
    model_transfer = model_transfer.cuda()
    
print(model_transfer)
VGG(
  (features): Sequential(
    (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): ReLU(inplace)
    (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (3): ReLU(inplace)
    (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (6): ReLU(inplace)
    (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (8): ReLU(inplace)
    (9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (11): ReLU(inplace)
    (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (13): ReLU(inplace)
    (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (15): ReLU(inplace)
    (16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (18): ReLU(inplace)
    (19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (20): ReLU(inplace)
    (21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (22): ReLU(inplace)
    (23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (25): ReLU(inplace)
    (26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (27): ReLU(inplace)
    (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (29): ReLU(inplace)
    (30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (classifier): Sequential(
    (0): Linear(in_features=25088, out_features=4096, bias=True)
    (1): ReLU(inplace)
    (2): Dropout(p=0.5)
    (3): Linear(in_features=4096, out_features=4096, bias=True)
    (4): ReLU(inplace)
    (5): Dropout(p=0.5)
    (6): Linear(in_features=4096, out_features=1000, bias=True)
  )
)
In [22]:
for param in model_transfer.parameters():
    param.requires_grad = False
    
model_transfer.classifier[6] = nn.Linear(model_transfer.classifier[6].in_features, n_classes)
if use_cuda:
    model_transfer = model_transfer.cuda()

Question 5: Outline the steps you took to get to your final CNN architecture and your reasoning at each step. Describe why you think the architecture is suitable for the current problem.

Answer:

For transfer learning, I decided to use VGGNet as a feature extractor. The choice was driven by its well-reported performance on the ImageNet dataset. Since the dog breed dataset is essentially a subset of the ImageNet domain, the approach is to freeze the feature extractor weights and train only a fully-connected classifier on top of them. Finally, the output layer of the network is replaced so that it matches the number of dog breeds to classify.
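
A quick verification (a hypothetical cell, not part of the original notebook) that the freezing worked as intended: after replacing the head, only its weight and bias should still require gradients.

trainable = [name for name, p in model_transfer.named_parameters() if p.requires_grad]
print(trainable)  # ['classifier.6.weight', 'classifier.6.bias']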

Specify Loss Function and Optimizer

The next code cell specifies a loss function and optimizer. Since the pre-trained VGG-16 head outputs raw logits (no log_softmax), CrossEntropyLoss is used here, and only the parameters of the replaced final layer are passed to the optimizer.

In [23]:
criterion_transfer = torch.nn.CrossEntropyLoss()
optimizer_transfer = optim.Adam(model_transfer.classifier[6].parameters(), lr=0.001)

Train and Validate the Model

In [24]:
n_epochs = 10
model_transfer = train(n_epochs, loaders_transfer, model_transfer, optimizer_transfer, criterion_transfer, use_cuda, 'model_transfer.pt')
Epoch: 1 	Training Loss: 2.064124 	Validation Loss: 0.562601
Saving the model...
Epoch: 2 	Training Loss: 1.260423 	Validation Loss: 0.510081
Saving the model...
Epoch: 3 	Training Loss: 1.225705 	Validation Loss: 0.461971
Saving the model...
Epoch: 4 	Training Loss: 1.207265 	Validation Loss: 0.416635
Saving the model...
Epoch: 5 	Training Loss: 1.197403 	Validation Loss: 0.428077
Epoch: 6 	Training Loss: 1.142980 	Validation Loss: 0.414103
Saving the model...
Epoch: 7 	Training Loss: 1.180596 	Validation Loss: 0.460751
Epoch: 8 	Training Loss: 1.170238 	Validation Loss: 0.414338
Epoch: 9 	Training Loss: 1.154527 	Validation Loss: 0.489443
Epoch: 10 	Training Loss: 1.114702 	Validation Loss: 0.459510

Test the Model

In [25]:
model_transfer.load_state_dict(torch.load('model_transfer.pt'))

test(loaders_transfer, model_transfer, criterion_transfer, use_cuda)
Test Loss: 0.495819


Test Accuracy: 84% (705/836)

Predict Dog Breed with the Model

The code cell below defines a function that takes an image path as input and returns the dog breed (Affenpinscher, Afghan hound, etc.) predicted by the model.

In [26]:
class_names = [item[4:].replace("_", " ") for item in train_data.classes]


def predict_breed_transfer(img_path):
    """Determine the dog breed of the image stored at img_path according to the trained model"""
    transform = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225))
    ])
    img = Image.open(img_path).convert('RGB')  # handle grayscale/RGBA inputs
    img_tensor = transform(img).unsqueeze(0)
    if use_cuda:
        img_tensor = img_tensor.cuda()
    model_transfer.eval()
    with torch.no_grad():
        model_output = model_transfer(img_tensor)
    _, pred = torch.max(model_output, dim=1)
    return class_names[pred[0]] if pred[0] < len(class_names) else "Error: Prediction out of range"

Step 5: Write your Algorithm

This section presents an algorithm that accepts a file path to an image and first determines whether the image contains a human, dog, or neither. Then,

  • if a dog is detected in the image, return the predicted breed.
  • if a human is detected in the image, return the resembling dog breed.
  • if neither is detected in the image, provide output that indicates an error.

Some sample output for our algorithm is provided below:

[Image: sample human output]

Write your Algorithm

In [27]:
def run_app(img_path):
    """Determine if the image stored at img_path contains a human or a dog and return the dog breed"""
    pred = predict_breed_transfer(img_path)
    if dog_detector(img_path, VGG16_predict) and not pred.startswith("Error"):
        print(f"Hey! This dog looks like a {pred}.")
    elif face_detector(img_path) and not pred.startswith("Error"):
        print(f"Hey! That human face looks like a {pred}!")
    else:
        print("The model failed to detect human or dog faces in that image." +
              "Please make sure you are using an image of a human or a dog.")
    img = cv2.imread(img_path)
    cv_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    plt.imshow(cv_rgb)
    plt.show()

Step 6: Test Your Algorithm

This section takes the algorithm of the previous section for a spin! What kind of dog does the algorithm think that you look like? If you have a dog, does it predict your dog's breed accurately? If you have a cat, does it mistakenly think that your cat is a dog?

Test Your Algorithm on Sample Images!

Test the algorithm with at least six local images (at least 2 human and 2 dog images).

Question 6: Is the output better than you expected :) ? Or worse :( ? Provide at least three possible points of improvement for your algorithm.

Answer:

For this small test, the model correctly predicted the dogs' breeds. Also, in photos where the dog was a mongrel, the model returned a breed with similar features. It was fun to see the predictions on human faces and to try to spot, on closer inspection, the features that influenced the decision.

Possible improvements:

  • Training the model for more epochs (the validation loss was still decreasing, suggesting the model would have kept improving had it run longer);
  • Performing transfer learning with, say, 5 state-of-the-art architectures and picking the one with the best validation accuracy;
  • Returning the top 3 predicted breeds with their probabilities to analyse the classification confidence, as sketched below. Furthermore, this could be useful to better classify mixed-breed dogs.
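
As a sketch of the last idea (hypothetical, not part of the original notebook), a top-3 predictor could reuse the inference transforms and apply a softmax before torch.topk; the helper name predict_top3_breeds is made up for illustration:

def predict_top3_breeds(img_path):
    """Hypothetical helper: return the 3 most likely breeds with probabilities"""
    transform = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225))
    ])
    img_tensor = transform(Image.open(img_path).convert('RGB')).unsqueeze(0)
    if use_cuda:
        img_tensor = img_tensor.cuda()
    model_transfer.eval()
    with torch.no_grad():
        probs = torch.softmax(model_transfer(img_tensor), dim=1)
    top_probs, top_idx = probs.topk(3, dim=1)
    return [(class_names[i.item()], p.item()) for i, p in zip(top_idx[0], top_probs[0])]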
In [28]:
# Example images
for file in np.hstack((human_files[:3], dog_files[:3])):
    run_app(file)
Hey! That human face looks like a Beagle!
Hey! That human face looks like a Australian shepherd!
Hey! That human face looks like a Silky terrier!
Hey! This dog looks like a Bullmastiff.
Hey! This dog looks like a Mastiff.
Hey! This dog looks like a Bullmastiff.
In [30]:
custom_pics = np.array(glob("./images/custom_images/*"))


for file in custom_pics:
    run_app(file)
Hey! This dog looks like a Pointer.
Hey! This dog looks like a German shepherd dog.
Hey! That human face looks like a Pharaoh hound!
Hey! This dog looks like a Cavalier king charles spaniel.
Hey! This dog looks like a Golden retriever.